Population Structure / Pangenome Description
In this example we continue with the Escherichia coli and Shigella high-quality dataset. You can download a copy from https://doi.org/10.6084/m9.figshare.13049654. We continue after the QC step described above.
Core Genome Analysis
PATO implements a set of tools to inspect the core genome, including the pangenome composition (core, accessory and pangenome fractions) and a function to create a core-genome alignment.
We can inspect the pangenome composition of our dataset with:
cp <- core_plots(ecoli_mm,reps = 10, threshold = 0.95, steps = 10)
PATO can build a core-genome alignment to use with external software such as IQ-TREE (http://www.iqtree.org/), RAxML-NG (http://www.exelixis-lab.org/software.html), FastTree (http://www.microbesonline.org/fasttree/) or other phylogenetic inference software. The core genome is computed from an mmseqs object, so the definition of the core genome depends on the parameters used in that step.
In this example we build a core-genome alignment of the ~800 genomes of the non-redundant set.
nr_files = nr_list %>%
  as.data.frame() %>%
  group_by(cluster) %>%
  top_n(1, centrality) %>%
  summarise_all(first) %>%
  select(Source) %>%
  distinct()
We have selected the best sample of each cluster (the one with the maximum centrality value). Some clusters have several samples with the same centrality value, so we take the first one. Now we compute the core genome for this list of samples.
core <- mmseqs(nr_files) %>% core_genome(type = "prot")
export_core_to_fasta(core,"core.aln")
Sometimes, when you are using public data, the core genome can be smaller than you expect. Quoting Andrew Page, creator of Roary (https://sanger-pathogens.github.io/Roary/):
I downloaded a load of random assemblies from GenBank. Why am I seeing crazy results?
Gene prediction software rarely completely agrees, with differing opinions on the actual start of a gene, or which of the overlapping open reading frames is actually the correct one, etc. As a result, if you mix these together you can get crazy results. The solution is to reannotate all of the genomes with a single method (like PROKKA). Otherwise you will waste vast amounts of time looking at noise, errors, and the batch effect.
Other times the problem may be outliers.
The exported files are in multi-alignment FASTA format and can be used with most phylogenetic tools. In this case we use FastTree for phylogenetic inference:
fasttreeMP core.aln > core.tree
Then we can read the output file into our R environment and plot the result.
library(phangorn)
library(ggtree)
core_tree = read.tree("core.tree")
annot_tree = species %>%
  filter(!grepl("Citrobacter", organism_name)) %>%
  filter(!grepl("marmotae", organism_name)) %>%
  select(Target, organism_name) %>%
  distinct()
core_tree %>% midpoint %>% ggtree(layout = "circular") %<+% annot_tree + geom_tippoint(aes(color = organism_name))
In this case we have added the species information extracted above.
Bear in mind that Maximum Likelihood trees can take a long time to compute.
Finally, you can obtain a SNP matrix from the core-genome alignment. The matrix shows the total number of SNPs (or changes, in the case of protein alignments) shared among the samples. The matrix can be normalized by the total length of the genome (in megabases) or left as the raw number of variants.
var_matrix = core_snps_matrix(core, norm = TRUE)
pheatmap::pheatmap(var_matrix,show_rownames = F, show_colnames = F)
Accessory Genome Analysis
In this case we create the accessory genome taking those proteins present in no more than 80% of the genomes.
ecoli_accnet_all <- accnet(ecoli_mm,threshold = 0.8, singles = FALSE)
As shown above, PATO has an object type for the accessory genome (accnet) and functions to analyze and visualize the content of the accessory genome and the relationships between genes/proteins and genomes. We can visualize an accnet object as a bipartite network. AccNET networks are commonly very large, so we recommend visualizing them with Gephi. We do not recommend trying to visualize AccNET networks with more than 1,000 genomes.
One of the most interesting tasks when analyzing an accessory genome is finding which genes/proteins are over-represented in a set of genomes. The accnet_enrichment_analysis() function tests which genes/proteins are over-represented in a cluster in comparison with the population (i.e., the whole dataset). The cluster definition can be external (any categorical metadata such as Sequence Type, Source, Serotype, etc.) or internal, from some clustering process. Keep in mind that redundancy can bias this kind of analysis: if some samples are over-represented in your dataset, for example an outbreak, the results can be biased. You will find more significant genes/proteins because the diversity of the dataset is not homogeneously distributed. For this reason we recommend using a non-redundant set of samples. In this example we select 800 non-redundant genomes.
ec_nr <- non_redundant(ecoli_mash, number = 800)
ecoli_accnet_nr <- extract_non_redundant(ecoli_accnet, ec_nr)
ec_800_cl <- clustering(ecoli_accnet_nr, method = "mclust", d_reduction = TRUE)
Now we can visualize the network using Gephi.
export_to_gephi(ecoli_accnet_nr, "accnet800", cluster = ec_800_cl)
[https://doi.org/10.6084/m9.figshare.13089338]
To perform the enrichment analysis we use:
accnet_enr_result <- accnet_enrichment_analysis(ecoli_accnet_nr, cluster = ec_800_cl)
# A tibble: 1,149,041 x 14
Target Source Cluster perClusterFreq ClusterFreq ClusterGenomeSize perTotalFreq TotalFreq OdsRatio pvalue padj AccnetGenomeSize AccnetProteinSi… Annot
<chr> <chr> <dbl> <dbl> <int> <int> <dbl> <int> <dbl> <dbl> <dbl> <int> <int> <chr>
1 1016|NZ_CP053592.… GCF_000009565.1_ASM956v… 1 0.0436 13 298 0.0173 20 2.52 3.62e-5 0.00130 1156 74671 ""
2 1016|NZ_CP053592.… GCF_000022665.1_ASM2266… 1 0.0436 13 298 0.0173 20 2.52 3.62e-5 0.00130 1156 74671 ""
3 1016|NZ_CP053592.… GCF_000023665.1_ASM2366… 1 0.0436 13 298 0.0173 20 2.52 3.62e-5 0.00130 1156 74671 ""
4 1016|NZ_CP053592.… GCF_000830035.1_ASM8300… 1 0.0436 13 298 0.0173 20 2.52 3.62e-5 0.00130 1156 74671 ""
5 1016|NZ_CP053592.… GCF_000833145.1_ASM8331… 1 0.0436 13 298 0.0173 20 2.52 3.62e-5 0.00130 1156 74671 ""
6 1016|NZ_CP053592.… GCF_001039415.1_ASM1039… 1 0.0436 13 298 0.0173 20 2.52 3.62e-5 0.00130 1156 74671 ""
7 1016|NZ_CP053592.… GCF_001596115.1_ASM1596… 1 0.0436 13 298 0.0173 20 2.52 3.62e-5 0.00130 1156 74671 ""
8 1016|NZ_CP053592.… GCF_009663855.1_ASM9663… 1 0.0436 13 298 0.0173 20 2.52 3.62e-5 0.00130 1156 74671 ""
9 1016|NZ_CP053592.… GCF_009832985.1_ASM9832… 1 0.0436 13 298 0.0173 20 2.52 3.62e-5 0.00130 1156 74671 ""
10 1016|NZ_CP053592.… GCF_013166955.1_ASM1316… 1 0.0436 13 298 0.0173 20 2.52 3.62e-5 0.00130 1156 74671 ""
# … with 1,149,031 more rows
[https://doi.org/10.6084/m9.figshare.13089314.v2]
Now we can export a new network with the adjusted p-values as edge weights.
accnet_with_padj(accnet_enr_result) %>% export_to_gephi("accnet800.padj", cluster = ec_800_cl)
[https://doi.org/10.6084/m9.figshare.13089401.v1]
PATO also includes some functions to study gene/protein distributions: singles(), twins() and coincidents().
singles() finds those genes/proteins that are present in only one sample (genome, pangenome…).
twins() finds those genes/proteins that have exactly the same connections (i.e., genes/proteins present in the same genomes).
coincidents() finds those genes/proteins with similar connections (i.e., genes/proteins that are usually found together).
The difference between twins() and coincidents() is that coincidents() is more flexible and therefore less sensitive to outliers. twins() is faster and accurate but very sensitive to noise or outliers, because a single missing connection (for example, a bad protein prediction in one genome) automatically removes that protein from the twin group. On the other hand, coincidents() is slower and its results are sometimes too general, producing large pseudo-core clusters.
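A minimal usage sketch of these three functions, applied to the non-redundant accnet object built above (the exact signatures and any extra arguments are assumptions; check ?singles, ?twins and ?coincidents in PATO):

```r
# Illustrative sketch: each function takes an accnet object and returns
# the matching genes/proteins (argument names beyond the object are assumed).
sing <- singles(ecoli_accnet_nr)      # genes/proteins found in a single genome
tw   <- twins(ecoli_accnet_nr)       # genes/proteins with identical genome sets
coin <- coincidents(ecoli_accnet_nr) # genes/proteins with similar genome sets
```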
Population structure
PATO includes tools for population structure analysis and comparison. PATO can analyse the population structure of the whole genome (MASH based) or of the accessory genome (AccNET based).
We can visualize our dataset as a dendrogram (i.e., a tree). PATO allows visualizing both mash and accnet data as a tree. In the case of accnet data, PATO first calculates a distance matrix from the presence/absence matrix using the Jaccard distance. The function similarity_tree() implements different methods to build the dendrogram: phylogenetic approaches such as Neighbour Joining and FastME (Minimum Evolution), and hierarchical clustering methods such as complete linkage, UPGMA, Ward's minimum variance and WPGMC.
mash_tree_fastme <- similarity_tree(ecoli_mash)
mash_tree_NJ <- similarity_tree(ecoli_mash, method = "NJ")
mash_tree_upgma <- similarity_tree(ecoli_mash,method = "UPGMA")
accnet_tree_upgma <- similarity_tree(ecoli_accnet,method = "UPGMA")
The output is in phylo format, so it can be visualized with external packages such as ggtree.
mash_tree_fastme %>% midpoint %>% ggtree(layout = "circular") %<+% annot_tree + geom_tippoint(aes(color = organism_name))
Using other external packages, we can compare the arrangement of the pangenome (mash data) against the accessory genome (accnet).
library(dendextend)
tanglegram(ladderize(mash_tree_upgma), ladderize(accnet_tree_upgma), fast = TRUE, main = "Mash vs AccNET UPGMA")

Some Maximum Likelihood inference software accepts binary (0-1) alignments as input, so we can use the accessory data (accnet) to infer a (non-phylogenetic) tree from it. This is similar to similarity_tree(), but instead of being based on distance metrics it is based on ML principles. To export this alignment you can use export_accnet_aln() and then use the result as the input alignment.
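For example (the IQ-TREE invocation and its flags are assumptions for illustration; check the IQ-TREE documentation for binary-data models):

```r
# Export the accessory presence/absence data as a binary (0/1) alignment
export_accnet_aln(ecoli_accnet, file = "acc.aln")

# Run IQ-TREE externally with a binary substitution model (flags assumed);
# IQ-TREE writes the resulting tree to acc.aln.treefile
system("iqtree -s acc.aln -st BIN -m GTR2")
```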
Again, we can import it into R and plot it.
acc_tree = read.tree("acc.aln.treefile")
acc_tree %>% midpoint.root() %>% ggtree()
These alignments can be very large. Most of the time accessory genomes contain many spurious genes/proteins that add no information to the alignment. For this reason export_accnet_aln() has a parameter, min_freq, to filter genes/proteins by their frequency. This option significantly reduces the alignment length and improves computation times.
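For instance (the frequency threshold here is purely illustrative):

```r
# Export only genes/proteins present in at least 3 genomes
export_accnet_aln(ecoli_accnet, file = "acc.aln", min_freq = 3)
```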
PATO has a set of functions to visualize data. We have seen trees, but PATO also implements methods to visualize the relationships among genomes as networks. A K-Nearest Neighbour Network represents the relationships in the data as a network plot. The knnn() function builds a network linking each genome with its K best neighbours instead of choosing a distance threshold. This approach minimizes the cluttering of the network and tries to create a connected network. The method can choose the K best neighbours with or without repetitions: if repeats = FALSE, each node is linked to its K best neighbours that are not yet linked to it.
# K-NNN with 10 neighbours and with repetitions
knnn_mash_10_w_r <- knnn(ecoli_mash,n=10, repeats = TRUE)
# K-NNN with 25 neighbours and with repetitions
knnn_mash_25_w_r <- knnn(ecoli_mash,n=25, repeats = TRUE)
# K-NNN with 50 neighbours and with repetitions
knnn_mash_50_w_r <- knnn(ecoli_mash,n=50, repeats = TRUE)
The knnn() function returns an igraph object that can be visualized with several packages. However, PATO implements its own function, plot_knnn_network(), which uses threejs and igraph itself for layout and plotting. Given the typical size of these networks, we strongly recommend using this function or the external software Gephi (https://gephi.org/). You can use export_to_gephi() to export these networks to Gephi.
export_to_gephi(knnn_mash_50_w_r,file = "knnn_50_w_r.tsv")
If you want, you can use the internal function plot_knnn_network() to visualize the network. This function uses igraph layout algorithms to arrange the network and the threejs package to draw and explore it.
plot_knnn_network(knnn_mash_50_w_r)
PATO includes different clustering methods for different kinds of data (objects). The clustering() function gathers all the approaches to simplify the process. For mash and accnet objects the following methods are available:
- mclust: performs clustering using Gaussian Finite Mixture Models. It can be combined with d_reduction. This method uses the mclust package and finds the optimal number of clusters.
- upgma: performs hierarchical clustering using the UPGMA algorithm. The user must provide the number of clusters.
- ward.D2: performs hierarchical clustering using the Ward algorithm. The user must provide the number of clusters.
- hdbscan: performs density-based spatial clustering of applications with noise using the dbscan package. It finds the optimal number of clusters.
Any of the above methods is compatible with a multidimensional scaling (MDS) step. PATO performs the MDS using the UMAP algorithm (Uniform Manifold Approximation and Projection). UMAP is a tool for MDS similar to other machine learning techniques such as t-SNE or Isomap. Dimension reduction algorithms tend to fall into two categories: those that seek to preserve the global distance structure of the data, and those that favor the preservation of local distances over global distances. Algorithms such as PCA, MDS and Sammon mapping fall into the former category, while t-SNE, Isomap and UMAP fall into the latter. The latter kind combines better with clustering algorithms, because the clustering process tries to find local structures.
ec_cl_mclust_umap <- clustering(ecoli_mash, method = "mclust",d_reduction = TRUE)
On the other hand, clustering() can handle knnn networks as input. In this case, PATO uses network clustering algorithms such as:
- greedy: community structure via greedy optimization of modularity
- louvain: multi-level modularity optimization algorithm for finding community structure
- walktrap: community structure via short random walks
ec_cl_knnn <-clustering(knnn_mash_50_w_r, method = "louvain")
Whatever the method used, the output always has the same structure: a data.frame with two columns, c("Source","Cluster"). The reason for this format is compatibility with the rest of the data: the result can be combined with the other objects using the Source variable as the key.
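As a minimal sketch of such a combination (assuming, as used later in this tutorial, that a mash object stores its distance table in $table with a Source column):

```r
library(dplyr)

# Attach the cluster assignment of each Source genome to the mash distances
mash_with_clusters <- ecoli_mash$table %>%
  inner_join(ec_cl_mclust_umap, by = "Source")
```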
To visualize the clustering results, or just to inspect the data structure, we can use the umap_plot() function. The function performs a UMAP reduction and plots the results. We can pass the clustering results as a parameter and visualize them.
umap_mash <- umap_plot(ecoli_mash)
umap_mash <- umap_plot(ecoli_mash, cluster = ec_cl_mclust_umap)
We also can use plot_knnn_network() to visualize the network clustering results.
cl_louvain = clustering(knnn_mash_25_w_r, method = "louvain")
plot_knnn_network(knnn_mash_25_w_r, cluster = cl_louvain, edge.alpha = 0.1)
Annotation
PATO has a function to annotate antibiotic resistance genes and virulence factors: annotate(). Antibiotic resistance genes are predicted using MMseqs2 against the ResFinder database (https://cge.cbs.dtu.dk/services/ResFinder/) (doi: 10.1093/jac/dks261). For virulence factors we use VFDB (http://www.mgc.ac.cn/cgi-bin/VFs/v5/main.cgi) (doi: 10.1093/nar/gky1080). VFDB has two gene sets: VF_A, the core dataset, and VF_B, the full dataset. The core dataset includes only genes associated with experimentally verified VFs, whereas the full dataset covers all genes related to known and predicted VFs in the database.
annotate() returns a table with all positive hits for each gene/protein of the dataset (files). annotate() can reuse the results of mmseqs() to speed up the process. Likewise, the query can be all genes/proteins or only the accessory ones. The results are quite raw, so the user must curate the table. We recommend the tidyverse tools; with them it is easy to obtain clean results.
library(tidyverse)
annotation <- annotate(files, type = "prot",database = c("AbR","VF_A"))
annotation %>%
  filter(pident > 0.95) %>%                 # remove all hits with identity lower than 95%
  filter(evalue < 1e-6) %>%                 # remove all hits with E-value greater than 1e-6
  group_by(Genome, Protein) %>%
  arrange(desc(bits), .by_group = TRUE) %>% # sort by bit score within each group
  slice_head()                              # keep only the best hit per protein-genome pair
PATO also includes a function to create a heatmap of the annotation:
heatmap_of_annotation(annotation %>% filter(DataBase =="AbR"), #We select only "AbR" results
min_identity = 0.99)
Annotation HeatMap
Or visualize it as a network.
network_of_annotation(annotation %>% filter(DataBase =="AbR"), min_identity = 0.99) %>% export_to_gephi("annotation_Network")
[https://doi.org/10.6084/m9.figshare.13121795]
Outbreak / Transmission / Description
One of the main goals in microbial genomics is finding outbreaks or direct transmissions among different patients or sources. Basically, we want to find the same strain (or a very similar one) in different subjects/sources. Commonly that means finding two (or more) strains with only a few SNPs between them.
For this example we use the dataset published in https://msphere.asm.org/content/5/1/e00704-19. In this paper the authors performed whole-genome comparative analyses on 60 E. coli isolates from soils and fecal sources (cattle, chickens, and humans) in households in rural Bangladesh. We have prepared the dataset for this example. You can download it here: https://doi.org/10.6084/m9.figshare.13482435.v1.
PATO implements different ways to find an outbreak/transmission. The most standard way is to compute the core genome, build the phylogenetic tree and count the number of SNPs among the strains (or, alternatively, compute the ANI). We can use four different alternatives:
- MASH similarity
- Core-genome + snp_matrix (roary-like)
- Core_snp_genome + snp_matrix (snippy-like)
- Snps-pairwaise (most expensive but most accurate)
First we download the data and decompress it into a folder, in this case ~/examplePATO. We set that folder as the working directory.
setwd("~/examplePATO/")
The Montealegre et al. dataset contains 60 genomes in GFF3 format including the sequence in FASTA format. We load the genomes into PATO.
gff_files <- dir("~/examplePATO/", pattern = "\\.gff", full.names = T) ##Creates a file-list
gffs <- load_gff_list(gff_files) ##Load the genomes
We also create a file with the real names of each sample. Using the bash command line, we extract the name of each sample.
grep -m 1 'strain' *.gff | sed 's/;/\t/g' | sed 's/:/\t/g' | cut -f1,22,23 | sed 's/nat-host=.*\t//' | sed 's/strain=//' > names.txt
Now we load the names into R.
strain_names <- read.table("names.txt") %>%                 # Load the file
  rename(Genome = V1, Name = V2) %>%                        # Rename the columns
  mutate(Name = gsub("-", "", Name)) %>%                    # Delete the '-' character
  mutate(Sample = str_sub(Name, 1, 4)) %>%                  # Extract the first 4 characters as Sample name
  mutate(Source = str_sub(Name, 5)) %>%                     # Extract the final characters as Source
  mutate(Genome = str_replace(Genome, "_genomic.gff", ""))  # Delete the final part of the filename
The samples are named with the household number and the source (H: Human, C: Cattle, CH: Chicken, S: Soil).
MASH similarity
The fastest (and easiest) way to find high similarities is to use the MASH distance.
mash <- mash(gffs, type ="wgs", n_cores = 20)
mash_tree <- similarity_tree(mash,method = "fastme") %>% phangorn::midpoint()
Now, using ggtree, we can add the name information to the tree.
# remove '_genomic.fna' from the tip.labels to fix with the strain_names table
mash_tree$tip.label <- gsub("_genomic.fna","",mash_tree$tip.label)
ggtree(mash_tree) %<+% strain_names + geom_tippoint(aes(color=Source)) + geom_tiplab(aes(label = Sample))
We can inspect the similarity values. Remember that the MASH distance is equivalent to \(D_{mash} = {1-\frac{ANI}{100}}\)
mash$table %>%
  mutate(Source = gsub("_genomic.fna", "", Source)) %>% # Remove extra characters
  mutate(Target = gsub("_genomic.fna", "", Target)) %>%
  inner_join(strain_names %>%                           # Add real names to Source
               rename(Source2 = Name) %>%
               select(Genome, Source2),
             by = c("Source" = "Genome")) %>%
  inner_join(strain_names %>%                           # Add real names to Target
               rename(Target2 = Name) %>%
               select(Genome, Target2),
             by = c("Target" = "Genome")) %>%
  filter(Source != Target) %>%                          # Remove the diagonal
  mutate(ANI = (1 - Dist) * 100) %>%                    # Compute the ANI
  filter(ANI > 99.9) %>%                                # Keep hits above 99.9% identity
  as.data.frame()                                       # Convert to data.frame to show all digits of ANI
Source2 Target2 ANI
1 HH08H HH29S 99.94674
2 HH15CH HH29CH 99.91693
3 HH29CH HH15CH 99.91693
4 HH29S HH08H 99.94674
We must take into account that MASH uses the whole genome to estimate the distance. That means that the accessory genome (plasmids, transposons, phages…) contributes to the distance.
Usually the number of SNPs is used as a measure of clonality, so we will compare the MASH approach with SNP-based approaches.
Core-genome + snp_matrix (roary-like)
In this case we use the PATO pipeline to compute the core genome and the number of SNPs among the samples.
mm <- mmseqs(gffs, type = "nucl")
core <- core_genome(mm,type = "nucl", n_cores = 20)
export_core_to_fasta(core,file = "pato_roary_like.fasta")
#Externaly compute the phylogenetic tree
system("fasttreeMP -nt -gtr pato_roary_like.fasta > pato_roary_like.tree")
#Import and rooting the tree
pato_roary_tree <- ape::read.tree("pato_roary_like.tree") %>% phangorn::midpoint()
#Fix the tip labels
pato_roary_tree$tip.label <- gsub("_genomic.ffn","",pato_roary_tree$tip.label)
#Draw the tree with the annotation.
ggtree(pato_roary_tree) %<+% strain_names + geom_tippoint(aes(color=Source)) + geom_tiplab(aes(label = Sample))
Now we can compute the SNPs matrix
pato_roary_m = core_snps_matrix(core, norm = T, rm.gaps = T) #compute the SNP matrix removing columns with gaps
#Fix the colnames and rownames to put the real names.
colnames(pato_roary_m) <- rownames(pato_roary_m) <- gsub("_genomic.ffn","",rownames(pato_roary_m))
tmp = pato_roary_m %>%
  as.data.frame() %>%
  rownames_to_column("Genome") %>%
  inner_join(strain_names) %>%
  column_to_rownames("Name") %>%
  select(-Genome, -Source, -Sample) %>%
  as.matrix()
colnames(tmp) = rownames(tmp)
#Plot the matrix as a heatmap
pheatmap::pheatmap(tmp,
  display_numbers = matrix(ifelse(tmp < 200, tmp, ""), nrow(tmp)), # display only values lower than 200 SNPs
  number_format = '%i',
  annotation_row = strain_names %>% select(-Genome, -Sample) %>% column_to_rownames("Name"),
  annotation_col = strain_names %>% select(-Genome, -Sample) %>% column_to_rownames("Name"),
  show_rownames = T,
  show_colnames = T)
We can inspect the matrix manually in the console.
tmp %>%
as.data.frame() %>%
rownames_to_column("Source") %>%
pivot_longer(-Source,names_to = "Target", values_to = "SNPs") %>%
filter(Source != Target) %>%
filter(SNPs < 200)
# A tibble: 10 x 3
Source Target SNPs
<chr> <chr> <dbl>
1 HH29S HH08H 1
2 HH29CH HH15CH 21
3 HH24H HH24CH 0
4 HH24CH HH24H 0
5 HH24C HH20C 72
6 HH20C HH24C 72
7 HH19S HH19CH 1
8 HH19CH HH19S 1
9 HH15CH HH29CH 21
10 HH08H HH29S 1
We find three possible transmissions with fewer than 4 SNPs per megabase: HH24 from human to chicken (or vice versa), HH29 soil to HH08 human (or vice versa) and HH19 human to chicken (or vice versa). PATO computes the number of SNPs normalized by the length of the alignment (i.e., the core genome) in megabases.
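To see the effect of this normalization, you can compute both the raw and the normalized matrices with the norm parameter used above:

```r
# Raw SNP counts vs counts normalized per megabase of core alignment
raw_m  <- core_snps_matrix(core, norm = FALSE)
norm_m <- core_snps_matrix(core, norm = TRUE)
```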
Core_snp_genome + snp_matrix (snippy-like)
Another option to find possible transmissions is the core_snp_genome() function. In this case all the genomes are aligned against a reference. core_snp_genome() finds the regions common to all the samples and extracts the SNPs from those regions. This approach is very similar to the Snippy pipeline (https://github.com/tseemann/snippy).
core_s <- core_snp_genome(gffs,type = "wgs")
export_core_to_fasta(core_s,file = "pato_snippy_like.fasta")
#Externaly compute the phylogenetic tree
system("fasttreeMP -nt -gtr pato_snippy_like.fasta > pato_snippy_like.tree")
#Import and rooting the tree
pato_snippy_tree <- ape::read.tree("pato_snippy_like.tree") %>% phangorn::midpoint()
#Fix the tip labels
pato_snippy_tree$tip.label <- gsub("_genomic.fna","",pato_snippy_tree$tip.label)
#Draw the tree with the annotation.
ggtree(pato_snippy_tree) %<+% strain_names + geom_tippoint(aes(color=Source)) + geom_tiplab(aes(label = Sample))
Now we compute the SNPs matrix.
pato_snippy_m = core_snps_matrix(core_s, norm = T) #Compute the normalized SNP matrix
#Fix the colnames and rownames to put the real names.
colnames(pato_snippy_m) <- rownames(pato_snippy_m) <- gsub("_genomic.fna","",rownames(pato_snippy_m))
tmp = pato_snippy_m %>%
  as.data.frame() %>%
  rownames_to_column("Genome") %>%
  inner_join(strain_names) %>%
  column_to_rownames("Name") %>%
  select(-Genome, -Source, -Sample) %>%
  as.matrix()
colnames(tmp) = rownames(tmp)
pheatmap::pheatmap(tmp,
  display_numbers = matrix(ifelse(tmp < 200, tmp, ""), nrow(tmp)),
  number_format = '%i',
  annotation_row = strain_names %>% select(-Genome, -Sample) %>% column_to_rownames("Name"),
  annotation_col = strain_names %>% select(-Genome, -Sample) %>% column_to_rownames("Name"),
  show_rownames = T,
  show_colnames = T)
And a manual inspection
tmp %>%
as.data.frame() %>%
rownames_to_column("Source") %>%
pivot_longer(-Source,names_to = "Target", values_to = "SNPs") %>%
filter(Source != Target) %>%
filter(SNPs < 200)
# A tibble: 12 x 3
Source Target SNPs
<chr> <chr> <dbl>
1 HH29S HH08H 8
2 HH29CH HH15CH 28
3 HH24H HH24CH 8
4 HH24CH HH24H 8
5 HH24C HH20C 144
6 HH20C HH24C 144
7 HH19S HH19CH 11
8 HH19CH HH19S 11
9 HH16CH HH03H 91
10 HH15CH HH29CH 28
11 HH08H HH29S 8
12 HH03H HH16CH 91
With this approach the number of SNPs increases, but the pairs are the same: HH24H<->HH24CH, HH29S<->HH08H and HH19S<->HH19CH.
Snps-pairwaise (most expensive but most accurate)
The last two approaches share the same limitation: they depend on the dataset structure. In both cases we count the SNPs of the core genome, but the core genome depends on the dataset. If the dataset contains an outlier or very heterogeneous diversity, the core genome can be narrowed, underestimating the number of SNPs even after normalizing by the alignment length. PATO implements a pairwise computation of SNP counts with the function snps_pairwaise(). This function is computationally expensive because it performs an alignment between every pair of sequences, \(O(n^2)\), compared with core_snp_genome(), which costs \(O(n)\). We do not recommend using it on datasets larger than 100 genomes (which would mean 10,000 alignments).
pw_normalized <- snps_pairwaise(gffs, type = "wgs",norm = T, n_cores = 20)
tmp = pw_normalized %>%
  as.data.frame() %>%
  rownames_to_column("Genome") %>%
  inner_join(strain_names) %>%
  column_to_rownames("Name") %>%
  select(-Genome, -Source, -Sample) %>%
  as.matrix()
colnames(tmp) = rownames(tmp)
pheatmap::pheatmap(tmp,
  display_numbers = matrix(ifelse(tmp < 200, tmp, ""), nrow(tmp)),
  number_format = '%i',
  annotation_row = strain_names %>% select(-Genome, -Sample) %>% column_to_rownames("Name"),
  annotation_col = strain_names %>% select(-Genome, -Sample) %>% column_to_rownames("Name"),
  show_rownames = T,
  show_colnames = T)

And the manual inspection reveals that
tmp %>%
as.data.frame() %>%
rownames_to_column("Source") %>%
pivot_longer(-Source,names_to = "Target", values_to = "SNPs") %>%
filter(Source != Target) %>%
filter(SNPs < 200)
# A tibble: 10 x 3
Source Target SNPs
<chr> <chr> <dbl>
1 HH29S HH08H 8
2 HH29CH HH15CH 27
3 HH24H HH24CH 2
4 HH24CH HH24H 2
5 HH24C HH20C 97
6 HH20C HH24C 97
7 HH19S HH19CH 13
8 HH19CH HH19S 13
9 HH15CH HH29CH 27
10 HH08H HH29S 8
These are the real numbers of SNPs per megabase, excluding indels.
We can conclude that HH24 looks like a recent transmission, while HH29<->HH08 and HH19 appear to be more distant transmissions.
If we compare these results with the MASH results, we see that they do not fully correspond. This is probably due to HGT elements (plasmids, phages, ICEs, etc.) that none of the SNP-based approaches take into account.
Pangenomes Analysis
PATO can also analyze the relationships among pangenomes of different (or the same) species, that is, treat clusters of genomes (pangenomes) as individual elements. In this example we analyze 49,591 firmicutes genomes from the NCBI database. For such large datasets we recommend the specific pangenomes pipeline. This pipeline first clusters the genomes into pangenomes, for example clusters of species, intra-species phylogroups or even sequence types (STs). The diversity of each cluster depends on the distance parameter; the pipeline creates clusters that are homogeneous in phylogenetic distance. The pipeline then produces an accnet object, so all the functions described above can be used with it as with a common accnet object. The pipeline also includes parameters to set the minimum number of genomes to consider a pangenome and the minimum frequency of a protein/gene family to be included in a pangenome.
res <- pangenomes_from_files(files,distance = 0.03,min_pange_size = 10,min_prot_freq = 2)
export_to_gephi(res,"/storage/tryPATO/firmicutes/pangenomes_gephi")
In this case we produce a Gephi table to visualize the accnet network of the dataset. To annotate the network we use the NCBI assembly table and take the species name of each pangenome cluster.
assembly = data.table::fread("ftp://ftp.ncbi.nlm.nih.gov/genomes/refseq/bacteria/assembly_summary.txt",sep = "\t",skip = 1, quote = "")
colnames(assembly) = gsub("# ","",colnames(assembly))
annot = res$members %>%
  #mutate(file = basename(path)) %>%
  separate(path, c("GCF","acc","acc2"), sep = "_", remove = FALSE) %>%
  unite(assembly_accession, GCF, acc, sep = "_") %>%
  left_join(assembly) %>%
  separate(organism_name, c("genus","specie"), sep = " ") %>%
  group_by(pangenome, genus, specie) %>%
  summarise(N = n()) %>%
  distinct() %>%
  ungroup() %>%
  group_by(pangenome) %>%
  slice_head() %>%
  mutate(ID = paste("pangenome_", pangenome, "_rep_seq.fasta", sep = "", collapse = ""))
annot <- annot %>%
mutate(genus = gsub("\\[","",genus)) %>%
mutate(genus = gsub("\\]","",genus)) %>%
mutate(specie = gsub("\\[","",specie)) %>%
mutate(specie = gsub("\\]","",specie)) %>%
unite("Label",genus,specie, remove = F)
write_delim(annot,"../pangenomes_gephi_extra_annot.tsv",delim = "\t", col_names = TRUE)
We choose as the species name the most frequent species in each cluster. Then, using Gephi, we visualize the network.
Another way to visualize the results is a heatmap of the sharedness among the pangenomes. PATO includes the function sharedness() to compute this. To visualize the data.table result from sharedness() we can use the pheatmap package. Since pheatmap only accepts a matrix as input, we must first transform our result into a matrix:
sh <- sharedness(res)
sh <- sh %>%
  as.data.frame() %>%
  rownames_to_column("ID") %>%
  inner_join(annot) %>%
  unite(Name, genus, specie, pangenome, sep = "_") %>%
  unite("Row_n", Label, N) %>%
  select(-ID) %>%
  column_to_rownames("Row_n")
colnames(sh) = rownames(sh)
Now we can execute pheatmap() with our pangenome dataset.
pheatmap::pheatmap(log2(sh+1),
clustering_method = "ward.D2",
clustering_distance_cols = "correlation",
clustering_distance_rows = "correlation")